Search for: All records

Creators/Authors contains: "Cohen, Albert"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available March 1, 2026
  2. We consider a parametric elliptic PDE with a scalar piecewise constant diffusion coefficient taking arbitrary positive values on fixed subdomains. This problem is not uniformly elliptic, since the contrast can be arbitrarily high, contrary to the Uniform Ellipticity Assumption (UEA) that is commonly made on parametric elliptic PDEs. We construct reduced model spaces that approximate all solutions uniformly well, with relative-error estimates that are independent of the contrast level. These estimates are sub-exponential in the reduced model dimension, yet they exhibit the curse of dimensionality as the number of subdomains grows. Similar estimates are obtained for the Galerkin projection, as well as for the state estimation and parameter estimation inverse problems. A key ingredient in our construction and analysis is the study of the convergence towards limit solutions of stiff problems when the diffusion tends to infinity in certain subdomains. (Sketch 1 after this list gives a toy illustration of this high-contrast setting.)
  3. Typical model reduction methods for parametric partial differential equations construct a linear space V_n which approximates well the solution manifold M consisting of all solutions u(y), with y the vector of parameters. In many problems of numerical computation, nonlinear methods such as adaptive approximation, n-term approximation, and certain tree-based methods may provide improved numerical efficiency over linear methods. Nonlinear model reduction methods replace the linear space V_n by a nonlinear space Σ_n. Little is known in terms of their performance guarantees, and most existing numerical experiments use a parameter dimension of at most two. In this work, we take a step towards a more cohesive theory for nonlinear model reduction. Framing these methods in the general setting of library approximation, we give a first comparison of their performance with that of standard linear approximation for any compact set. We then study these methods for solution manifolds of parametrized elliptic PDEs. We study a specific example of library approximation where the parameter domain is split into a finite number N of rectangular cells, with an affine space of dimension m assigned to each cell, and give performance guarantees for the accuracy of approximation in terms of m and N. (Sketch 2 after this list illustrates such a piecewise-affine library.)
  4. Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in Binev et al. (SIAM J. Math. Anal. 43 (2011) 1457–1472) and DeVore et al. (Constructive Approximation 37 (2013) 455–466), where it is shown that the reduced basis space V_n of dimension n, constructed by a certain greedy strategy, has approximation error similar to that of the optimal space associated to the Kolmogorov n-width of the solution manifold. The greedy construction of the reduced basis space is performed in an offline stage which requires at each step a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite training set obtained through a discretization of the parameter domain. To guarantee a final approximation error ε for the space generated by the greedy algorithm requires in principle that the snapshots associated to this training set constitute an approximation net for the solution manifold with accuracy of order ε. Hence, the size of the training set is the ε-covering number of M, and this covering number typically behaves like exp(Cε^(-1/s)) for some C > 0 when the solution manifold has n-width decay O(n^(-s)). Thus, the sheer size of the training set prohibits implementation of the algorithm when ε is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in ε^(-1). Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions. (Sketch 3 after this list illustrates a greedy selection over a random training set.)
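
Sketch 1 (for item 2). A minimal Python illustration, assuming a 1D toy version of the high-contrast problem: -(a(x)u'(x))' = 1 on (0,1) with u(0) = u(1) = 0, where a is piecewise constant on fixed subdomains. The function name solve_piecewise_diffusion, the P1 finite element discretization, and the mesh sizes are illustrative assumptions, not the paper's setup.

    # Toy 1D piecewise-constant diffusion solver (hypothetical illustration).
    import numpy as np

    def solve_piecewise_diffusion(a_values, n_cells_per_subdomain=50):
        """a_values: positive diffusion value on each fixed subdomain."""
        n = len(a_values) * n_cells_per_subdomain   # total mesh cells
        h = 1.0 / n
        # cell-wise coefficient, constant on each subdomain
        a = np.repeat(np.asarray(a_values, dtype=float), n_cells_per_subdomain)
        # tridiagonal P1 stiffness matrix on the interior nodes
        main = (a[:-1] + a[1:]) / h
        off = -a[1:-1] / h
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        b = np.full(n - 1, h)                       # load vector for f = 1
        return np.linalg.solve(A, b)                # interior nodal values

    # The contrast max(a)/min(a) can be arbitrarily large; the paper's point
    # is that reduced spaces can approximate all such solutions in relative
    # error uniformly in this contrast.
    u_moderate = solve_piecewise_diffusion([1.0, 2.0, 1.0])
    u_extreme = solve_piecewise_diffusion([1.0, 1e8, 1.0])
    print(u_moderate.max(), u_extreme.max())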
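
Sketch 2 (for item 3). A hedged sketch of a piecewise-affine library approximation in the spirit of item 3: the parameter box [0,1]^d is split into N rectangular cells, and on each cell an affine space of dimension m is fitted to local snapshots by an SVD. The snapshot map u_snapshot is a hypothetical stand-in for a parametric PDE solver, and the uniform grid split is one concrete choice, not the paper's construction.

    import numpy as np

    def u_snapshot(y):
        # hypothetical stand-in for a parametric PDE solution u(y)
        x = np.linspace(0.0, 1.0, 200)
        return np.sin(np.pi * x) / (1.0 + y[0] + 2.0 * y[1] * x)

    def build_library(splits_per_dim, m, n_train_per_cell=30, d=2, seed=0):
        """Split [0,1]^d into N = splits_per_dim**d cells; on each cell fit
        the affine space u_bar + span{v_1, ..., v_m} from local snapshots."""
        rng = np.random.default_rng(seed)
        library = {}
        for cell in np.ndindex(*([splits_per_dim] * d)):
            lo = np.array(cell) / splits_per_dim          # cell corner
            ys = lo + rng.random((n_train_per_cell, d)) / splits_per_dim
            S = np.stack([u_snapshot(y) for y in ys])     # local snapshots
            u_bar = S.mean(axis=0)                        # affine offset
            # leading m left singular vectors of the centered snapshots
            U, _, _ = np.linalg.svd((S - u_bar).T, full_matrices=False)
            library[cell] = (u_bar, U[:, :m])
        return library

    def approximate(library, splits_per_dim, y):
        """Project u(y) onto the affine space of the cell containing y."""
        idx = np.minimum((np.asarray(y) * splits_per_dim).astype(int),
                         splits_per_dim - 1)
        u_bar, V = library[tuple(int(i) for i in idx)]
        r = u_snapshot(y) - u_bar
        return u_bar + V @ (V.T @ r)

    lib = build_library(splits_per_dim=4, m=3)
    y = np.array([0.37, 0.81])
    err = np.linalg.norm(u_snapshot(y) - approximate(lib, 4, y))
    print(f"local affine projection error: {err:.2e}")

The trade-off studied in the paper is between the number of cells N and the local dimension m; the sketch only shows the mechanics of assigning one affine space per cell.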
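
Sketch 3 (for item 4). An idealized greedy reduced basis construction over a random training set, in the spirit of item 4. The true projection error on the training snapshots serves as the error surrogate here, whereas a practical offline stage would use a residual-based estimator; u_snapshot is again a hypothetical solution map.

    import numpy as np

    def u_snapshot(y):
        x = np.linspace(0.0, 1.0, 200)
        return 1.0 / (1.0 + y[0] + y[1] * x)    # hypothetical solution map

    def greedy_reduced_basis(n_max, n_train=500, d=2, seed=0, tol=1e-10):
        rng = np.random.default_rng(seed)
        # random training set, replacing a deterministic epsilon-net whose
        # size would grow exponentially as the target accuracy decreases
        Y = rng.random((n_train, d))
        S = np.stack([u_snapshot(y) for y in Y])    # training snapshots
        basis = np.empty((S.shape[1], 0))
        for _ in range(n_max):
            # projection error of every training snapshot on span(basis)
            R = S - (S @ basis) @ basis.T
            errs = np.linalg.norm(R, axis=1)
            k = int(np.argmax(errs))                # greedy choice
            if errs[k] < tol:
                break
            # normalized residual extends the orthonormal basis
            basis = np.column_stack([basis, R[k] / errs[k]])
        R = S - (S @ basis) @ basis.T
        return basis, np.linalg.norm(R, axis=1).max()

    V, worst = greedy_reduced_basis(n_max=8)
    print(f"dimension {V.shape[1]}, worst training error: {worst:.2e}")

The paper's main result concerns how large the random set Y must be: with high probability, a size polynomial in 1/ε suffices, rather than the exponential size of an ε-net over the solution manifold.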